

Search results: all records where Creators/Authors contains: "Huang, Lixiao"

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich, complex process that has given rise to numerous measures and approaches for understanding and examining it. Although researchers have been developing models of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insight into the dynamic processes of trust in automation. This paper provides an overview of mathematical modeling approaches for trust dynamics in human-automation interaction contexts, along with their limitations, feasibility, and generalizability. Furthermore, this study proposes a novel dynamic approach to modeling trust in automation that emphasizes incorporating different timescales into measurable components. Given the complex nature of trust in automation, it also suggests combining machine learning with dynamic modeling approaches and incorporating physiological data. 
  2. Navigation is critical for everyday tasks but is especially important in urban search and rescue (USAR) contexts. Aside from navigating successfully, individuals must also be able to communicate spatial information effectively. This study investigates how differences in spatial ability affect overall performance in a USAR task in a simulated Minecraft environment, and how effectively individuals communicate their location verbally. Randomly selected participants were asked to rescue as many victims as possible in three 10-minute missions. Results showed that sense of direction may not predict the ability to communicate spatial information, and that the skill of processing spatial information may be distinct from the ability to communicate spatial information to others. We discuss the implications of these findings for teaming contexts that involve both processes. 
  3. Abstract Artificial social intelligence (ASI) agents have great potential to aid the success of individuals, human–human teams, and human–artificial intelligence teams. To develop helpful ASI agents, we created an urban search and rescue task environment in Minecraft to evaluate ASI agents’ ability to infer participants’ knowledge training conditions and predict participants’ next victim type to be rescued. We evaluated ASI agents’ capabilities in three ways: (a) comparison to ground truth—the actual knowledge training condition and participant actions; (b) comparison among different ASI agents; and (c) comparison to a human observer criterion, whose accuracy served as a reference point. The human observers and the ASI agents used video data and timestamped event messages from the testbed, respectively, to make inferences about the same participants and topic (knowledge training condition) and the same instances of participant actions (rescue of victims). Overall, ASI agents performed better than human observers in inferring knowledge training conditions and predicting actions. Refining the human criterion can guide the design and evaluation of ASI agents for complex task environments and team composition. 
  4. The decision process of engaging or disengaging automation has been termed reliance on automation, and it has been widely analyzed as a summary measure of automation usage rather than a dynamic measure. We provide a framework for defining temporal reliance dynamics and apply it to a dataset from a previous study. Our findings show that (1) the higher the reliability of an automated system, the larger the reliance over time; and (2) greater workload imposed by the automation type does not significantly affect operators' reliance dynamics in high-reliability systems, but it does produce greater reliance in low-reliability systems. Furthermore, on average, operators with low performance make fewer decision changes and prefer to stick with their decision to use automation even when it is not performing well. Operators with high performance, on average, change decisions more frequently, and therefore their automation usage periods are shorter. 
  5. Risk has been a key factor influencing trust in human-automation interactions, though there is no unified tool for studying its dynamics. We provide a framework for defining and assessing the relative risk of automation usage through performance dynamics and apply this framework to a dataset from a previous study. Our approach allows us to explore how operators' ability and different automation conditions affect performance and relative risk dynamics. Our results on performance dynamics show that, on average, operators perform better (1) using automation that is more reliable and (2) using partial automation (more workload) than full automation (less workload). Our analysis of relative risk dynamics indicates that automation with higher reliability is associated with higher relative risk, suggesting that operators are willing to take more risk with more reliable automation. Additionally, when the reliability of automation is lower, operators adapt their behavior, resulting in lower risk. 
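The multi-timescale idea in entry 1 can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the two-component structure, the learning rates (`alpha_fast`, `alpha_slow`), and the equal-weight blend are hypothetical, not the paper's actual model.

```python
# Minimal two-timescale trust-update sketch (hypothetical model, not the
# paper's formulation). Trust is split into a fast, experience-driven
# component and a slow, dispositional component, each pulled toward the
# most recent interaction outcome at its own rate.

def update_trust(fast, slow, outcome, alpha_fast=0.5, alpha_slow=0.05):
    """One interaction step; `outcome` is 1.0 (automation succeeded) or 0.0."""
    fast += alpha_fast * (outcome - fast)   # reacts quickly to each outcome
    slow += alpha_slow * (outcome - slow)   # drifts slowly across many outcomes
    return fast, slow

fast, slow = 0.5, 0.5                       # neutral initial trust
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:    # a short interaction history
    fast, slow = update_trust(fast, slow, outcome)

trust = 0.5 * fast + 0.5 * slow             # overall trust as an equal blend
```

With a mostly successful history like this one, the fast component ends above the slow one, reflecting how the two timescales respond differently to the same measurable outcomes.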
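The temporal reliance dynamics of entry 4 can be sketched as a sliding-window engagement fraction plus a count of decision changes. The window size and the 0/1 encoding are illustrative assumptions, not the paper's definitions.

```python
# Sketch of temporal reliance dynamics (illustrative definitions, not the
# paper's). `engaged` is a 0/1 sequence: 1 = automation engaged at that step.

def reliance_dynamics(engaged, window=4):
    """Fraction of recent steps on which the operator relied on automation."""
    return [sum(engaged[max(0, t - window + 1): t + 1]) / min(window, t + 1)
            for t in range(len(engaged))]

def decision_changes(engaged):
    """Number of engage/disengage switches (decision changes)."""
    return sum(a != b for a, b in zip(engaged, engaged[1:]))

trace = [1, 1, 1, 0, 1, 1, 0, 0]
curve = reliance_dynamics(trace)    # reliance over time, one value per step
changes = decision_changes(trace)   # 3 switches in this trace
```

Treating reliance as a curve rather than a single summary fraction is what makes findings like "fewer decision changes among low performers" expressible at all.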
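The relative risk of automation usage in entry 5 can be illustrated with the textbook relative-risk ratio. The failure-rate definition and the numbers below are assumptions for illustration; the paper derives relative risk from performance dynamics rather than raw failure counts.

```python
# Illustrative relative-risk computation (assumed definition: ratio of the
# failure rate under automation to the failure rate under manual control).

def relative_risk(failures_auto, trials_auto, failures_manual, trials_manual):
    rate_auto = failures_auto / trials_auto        # failure rate with automation
    rate_manual = failures_manual / trials_manual  # failure rate without it
    return rate_auto / rate_manual                 # < 1 favors automation

rr = relative_risk(2, 20, 5, 20)  # 0.1 / 0.25 = 0.4
```

A ratio below 1 means engaging the automation is the lower-risk choice under this assumed definition, which is the sense in which operators can "take more risk" on highly reliable automation.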